    From virtual demonstration to real-world manipulation using LSTM and MDN

    Robots assisting the disabled or elderly must perform complex manipulation tasks and must adapt to the home environment and preferences of their user. Learning from demonstration is a promising approach, as it would allow a non-technical user to teach the robot different tasks. However, collecting demonstrations in the home environment of a disabled user is time consuming, disruptive to the comfort of the user, and presents safety challenges. It would therefore be desirable to perform the demonstrations in a virtual environment. In this paper we describe a solution to the challenging problem of behavior transfer from virtual demonstration to a physical robot. The virtual demonstrations are used to train a deep neural network based controller, which uses a Long Short-Term Memory (LSTM) recurrent neural network to generate trajectories. The training process uses a Mixture Density Network (MDN) to calculate an error signal suited to the multimodal nature of demonstrations. The controller learned in the virtual environment is transferred to a physical robot (a Rethink Robotics Baxter). An off-the-shelf vision component substitutes for the geometric knowledge available in the simulation, and an inverse kinematics module allows the Baxter to enact the trajectory. Our experimental studies validate the three contributions of the paper: (1) the controller learned from virtual demonstrations can successfully perform the manipulation tasks on a physical robot, (2) the LSTM+MDN architectural choice outperforms alternatives such as feedforward networks and mean-squared-error based training signals, and (3) including imperfect demonstrations in the training set allows the controller to learn how to correct its manipulation mistakes.
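
    For concreteness, a minimal sketch of what an LSTM trajectory generator with an MDN output head and a mixture negative log-likelihood loss might look like. Layer sizes, the number of mixture components, and the input/output dimensions are illustrative assumptions, not the authors' configuration.

    # Sketch only: an LSTM controller whose output layer parametrizes a Gaussian
    # mixture (MDN) over the next trajectory point, trained with negative
    # log-likelihood instead of mean-squared error. All sizes are assumptions.
    import torch
    import torch.nn as nn

    class LSTMMDNController(nn.Module):
        def __init__(self, input_dim=10, hidden_dim=64, output_dim=7, n_mix=5):
            super().__init__()
            self.output_dim, self.n_mix = output_dim, n_mix
            self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
            self.pi = nn.Linear(hidden_dim, n_mix)                      # mixture logits
            self.mu = nn.Linear(hidden_dim, n_mix * output_dim)         # component means
            self.log_sigma = nn.Linear(hidden_dim, n_mix * output_dim)  # log std devs

        def forward(self, x):
            h, _ = self.lstm(x)                                         # (B, T, H)
            B, T, _ = h.shape
            log_pi = torch.log_softmax(self.pi(h), dim=-1)              # (B, T, K)
            mu = self.mu(h).view(B, T, self.n_mix, self.output_dim)
            sigma = self.log_sigma(h).view(B, T, self.n_mix, self.output_dim).exp()
            return log_pi, mu, sigma

    def mdn_nll(log_pi, mu, sigma, target):
        """Negative log-likelihood of the demonstrated trajectory under the
        mixture; this is the multimodal-friendly error signal the MDN provides."""
        target = target.unsqueeze(2)                                    # (B, T, 1, D)
        log_prob = torch.distributions.Normal(mu, sigma).log_prob(target).sum(dim=-1)
        return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()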

    “I Want That”: Human-in-the-Loop Control of a Wheelchair-Mounted Robotic Arm

    Wheelchair-mounted robotic arms have been commercially available for a decade. To operate these robotic arms, a user must have a high level of cognitive function. Our research focuses on replacing a manufacturer-provided, menu-based interface with a vision-based system while adding autonomy to reduce the cognitive load. Instead of manual task decomposition and execution, the user explicitly designates the end goal, and the system autonomously retrieves the object. In this paper, we present the complete system, which can autonomously retrieve a desired object from a shelf. We also present the results of a 15-week study in which 12 participants from our target population used our system, totaling 198 trials.

    A Constrained Linear Approach To Identify A Multi-Timescale Adaptive Threshold Neuronal Model

    This paper is focused on the parameter estimation problem for a multi-timescale adaptive threshold (MAT) neuronal model. Using the dynamics of a non-resetting leaky integrator equipped with an adaptive threshold, a constrained iterative linear least squares method is implemented to fit the model to the reference data. Through manipulation of the system dynamics, the threshold voltage can be obtained as a realizable model that is linear in the unknown parameters. This linearly parametrized realizable model is then used within a prediction-error based framework to identify the threshold parameters, with the goal of predicting the precise firing times of single neurons. The estimation scheme is evaluated using both synthetic data obtained from an exact model and experimental data obtained from in vitro rat somatosensory cortical neurons. Results show the ability of this approach to fit the MAT model to different types of reference data.
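
    Because the MAT threshold is linear in its unknown weights once the timescales and spike times are fixed, a constrained linear least squares fit is straightforward to sketch. The two timescales, the non-negativity bounds, and the use of a reference threshold trace as the regression target are assumptions for illustration; the paper embeds this step in an iterative, prediction-error based framework.

    # Sketch only: fit a linearly parametrized multi-timescale adaptive threshold
    # with bound-constrained linear least squares. Timescales, bounds, and the
    # regression target are assumed for illustration.
    import numpy as np
    from scipy.optimize import lsq_linear

    def threshold_regressors(t, spike_times, taus=(10.0, 200.0)):
        """Columns: constant offset, then one exponential-decay kernel per fixed
        timescale, each summed over all past spikes (MAT-style threshold)."""
        cols = [np.ones_like(t)]
        for tau in taus:
            col = np.zeros_like(t)
            for ts in spike_times:
                mask = t >= ts
                col[mask] += np.exp(-(t[mask] - ts) / tau)
            cols.append(col)
        return np.column_stack(cols)            # shape (len(t), 1 + len(taus))

    def fit_threshold(t, theta_ref, spike_times):
        A = threshold_regressors(t, spike_times)
        # Keep the adaptation weights non-negative; the offset is unconstrained.
        res = lsq_linear(A, theta_ref, bounds=([-np.inf, 0.0, 0.0], np.inf))
        return res.x                            # [omega, alpha_1, alpha_2]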

    Bounds On The Smallest Eigenvalue Of A Pinned Laplacian Matrix

    In this note, we study a networked system with single or multiple pinning. Given a weighted, undirected network, we derive lower and upper bounds on its algebraic connectivity with respect to the reference signal. The bounds are derived by partitioning the network according to the distance of each node from the pinning set. Upper and lower bounds for two networks with differing topologies are computed to demonstrate the tightness of the derived results. Using the derived bounds, we show how the number of pinning nodes and the pinning gain required to achieve stability, or a specified convergence rate, for the network can be easily obtained.
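
    The quantity being bounded can be computed numerically for small networks, which is useful for checking any derived bound. The sketch below forms a pinned (grounded) Laplacian by adding the pinning gain to the diagonal entries of the pinned nodes and takes its smallest eigenvalue; the example graph, pinning set, and gain are arbitrary illustrations.

    # Sketch only: smallest eigenvalue of a pinned (grounded) Laplacian.
    import numpy as np
    import networkx as nx

    G = nx.path_graph(6)                       # example undirected network, unit edge weights
    L = nx.laplacian_matrix(G).toarray().astype(float)

    pinned = [0]                               # nodes connected to the reference signal
    gain = 2.0                                 # pinning gain
    L_pinned = L.copy()
    for i in pinned:
        L_pinned[i, i] += gain                 # add pinning gain on the pinned nodes

    lambda_min = np.linalg.eigvalsh(L_pinned)[0]   # eigenvalues in ascending order
    print(f"smallest eigenvalue of the pinned Laplacian: {lambda_min:.4f}")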

    Implementation Of Feeding Task Via Learning From Demonstration

    In this paper, a Learning From Demonstration (LFD) approach is used to design an autonomous meal-assistant agent. The feeding task is modeled as a mixture of Gaussian distributions. Using data collected via kinesthetic teaching, the parameters of a Gaussian Mixture Model (GMM) are learned using Gaussian Mixture Regression (GMR) and the Expectation Maximization (EM) algorithm. Reproduction of feeding trajectories for different environments is obtained by solving a constrained optimization problem. We show that the robot's end-effector can avoid obstacles by adding a set of extra constraints to the optimization problem. Finally, the performance of the designed meal assistant is evaluated in two feeding experiments: one with obstacles in the path between the bowl and the mouth and one without.
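
    The paper reproduces trajectories through a constrained optimization, which is not shown here; the sketch below covers only the GMM learning step and a plain GMR conditioning of position on time. The number of components and the [time, x, y, z] encoding of the kinesthetic demonstrations are assumptions.

    # Sketch only: fit a GMM to demonstration data and query it with GMR,
    # conditioning end-effector position on time.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from scipy.stats import multivariate_normal

    def fit_gmm(demos, n_components=5):
        """demos: array of shape (N, 4) with columns [time, x, y, z]."""
        return GaussianMixture(n_components=n_components, covariance_type="full").fit(demos)

    def gmr(gmm, t_query):
        """Condition the joint GMM p(t, x) on time t and return E[x | t]."""
        out = []
        for t in np.atleast_1d(t_query):
            means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
            # Responsibility of each component for this time instant.
            h = np.array([w[k] * multivariate_normal.pdf(t, means[k][0], covs[k][0, 0])
                          for k in range(len(w))])
            h /= h.sum()
            # Component-wise conditional means, blended by responsibility.
            x = sum(h[k] * (means[k][1:] +
                            covs[k][1:, 0] / covs[k][0, 0] * (t - means[k][0]))
                    for k in range(len(w)))
            out.append(x)
        return np.array(out)                   # reproduced end-effector positions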

    From Single 2D Depth Image to Gripper 6D Pose Estimation: A Fast and Robust Algorithm for Grabbing Objects in Cluttered Scenes

    In this paper, we investigate the problem of grasping previously unseen objects in unstructured environments cluttered with multiple objects. Object geometry, reachability, and force-closure analysis are considered to address this problem. A framework is proposed for grasping unknown objects by localizing contact regions on the contours formed by a set of depth edges extracted from a single-view 2D depth image. Specifically, contact regions are determined from edge geometric features derived from analysis of the depth map data. Finally, the performance of the approach is validated on scenes with both single and multiple objects, in both simulation and experiments. Using sequential processing in MATLAB running on a 4th-generation Intel Core desktop, simulation results on the benchmark Object Segmentation Database show that the algorithm takes 281 ms on average to generate the 6D robot pose needed to attach to a pair of viable grasping edges that satisfy reachability and force-closure conditions. Experimental results in the Assistive Robotics Laboratory at UCF, using a Kinect One sensor and a Baxter manipulator outfitted with a standard parallel gripper, showcase the feasibility of the approach in grasping previously unseen objects in uncontrived multi-object settings.
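
    The first stage of such a pipeline, extracting depth edges and candidate contours from a single depth image, can be sketched as follows. The normalization, Canny thresholds, and OpenCV 4.x return signature are assumptions, and the subsequent contact-region selection, reachability, and force-closure checks are not shown (the paper's implementation is in MATLAB).

    # Sketch only: depth-edge and contour extraction from a single-view depth image.
    import cv2
    import numpy as np

    def depth_edges(depth_m, low=30, high=90):
        """depth_m: float32 depth image in meters (e.g., from a Kinect sensor)."""
        valid = depth_m > 0
        d = np.where(valid, depth_m, depth_m[valid].max())      # fill holes with far depth
        d8 = cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        edges = cv2.Canny(d8, low, high)                        # binary depth-edge map
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        return edges, contours                                  # candidate grasping contours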

    Human-In-The-Loop Control Of An Assistive Robotic Arm In Unstructured Environments For Spinal Cord Injured Users

    We describe progress in implementing a vision-based robotic assist device to facilitate Activities of Daily Living (ADL) tasks for a class of users with motor disabilities. The goal of the research is to reduce time to task completion and cognitive burden for users interacting with an unstructured environment via a Wheelchair Mounted Robotic Arm (WMRA). The developed robot system is tested with five healthy subjects to assess its usefulness.